SES, MMPC: Feature selection algorithms for identifying minimal feature subsets
SES(target, dataset, max_k = 3, threshold = 0.05, test = NULL, ini = NULL, user_test = NULL, hash = FALSE, hashObject = NULL, robust = FALSE, ncores = 1)
MMPC(target, dataset, max_k = 3, threshold = 0.05, test = NULL, ini = NULL, user_test = NULL, hash = FALSE, hashObject = NULL, robust = FALSE, ncores = 1, backward = FALSE)
Important: the generated hashObjects should be used only when the same dataset is re-analyzed, possibly with different values of max_k and threshold.
The MMPC function implements the MMPC algorithm as presented in Tsamardinos, Brown and Aliferis (2006), "The max-min hill-climbing Bayesian network structure learning algorithm": http://www.dsl-lab.org/supplements/mmhc_paper/paper_online.pdf
For faster computations in the internal SES functions, install the suggested package "gRbase". In addition, the output values "univ" and "hashObject" can speed up subsequent runs of SES and MMPC. In the first run with a specific pair of hyper-parameters (threshold and max_k), the univariate association tests and the conditional independence tests (test statistics and the logarithms of their corresponding p-values) are stored and returned. In subsequent run(s) with different pair(s) of hyper-parameters you can supply this information to save time. With a few thousand variables you will see the difference, which can be up to 50%. For the non-robust correlation-based tests the difference may not be significant, though, because Fortran code is used to extract the (unconditional) correlation coefficients.
The max_k option: the maximum size of the conditioning set to use in the conditional independence test. Larger values provide more accurate results, at the cost of higher computational times. When the sample size is small (e.g., < 50 observations) the max_k parameter should be <= 5, otherwise the conditional independence test may not be able to provide reliable results.
If the dataset (predictor variables) contains missing (NA) values, they will automatically be replaced by the corresponding variable's (column) mean value, with an appropriate warning issued to the user after the execution.
If the target is a single integer value or a string, it must correspond to the column number or to the name of the target feature in the dataset. In any other case the target is treated as a variable that is not contained in the dataset.
If the 'test' argument is defined as NULL or "auto" and the user_test argument is NULL, then the algorithm automatically selects the best test based on the type of the data.
Conditional independence test functions to be passed through the user_test argument should have the same signature as the included tests; see testIndFisher for an example.
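As a rough sketch only, a user-supplied test might look like the skeleton below. The argument and return names here are illustrative assumptions; in real use, mirror the signature and return value of a bundled test exactly (e.g., inspect it with args(testIndFisher)).

```r
# Hypothetical skeleton of a user-defined conditional independence test.
# Argument names are illustrative; copy the signature of a bundled test
# (e.g. testIndFisher) exactly. Since all tests in the package work with
# the logarithm of the p-value, the function should return the test
# statistic together with the log p-value.
myCondIndTest <- function(target, dataset, xIndex, csIndex, ...) {
  # test the association of dataset[, xIndex] with target,
  # conditional on dataset[, csIndex]
  stat <- 0        # placeholder test statistic
  logp <- log(1)   # placeholder logarithm of the p-value
  list(stat = stat, pvalue = logp)
}
```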
For all the available conditional independence tests that are currently included in the package, please see CondIndTests.
If two or more p-values are below the machine epsilon (.Machine$double.eps, which is equal to 2.220446e-16), all of them are set to 0. The max-min heuristic, however, requires comparison and an ordering of the p-values. Hence, to make this comparison and ordering feasible, all conditional independence tests calculate the logarithm of the p-value.
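The effect is easy to see in plain R: extreme p-values underflow to zero and become indistinguishable, while their logarithms remain finite and orderable (a small illustration, not package code):

```r
# Two very extreme chi-squared statistics: both raw p-values underflow
# to 0, so they cannot be compared or ordered ...
pchisq(2000, df = 1, lower.tail = FALSE)
pchisq(3000, df = 1, lower.tail = FALSE)
# ... whereas the logarithms of the p-values stay finite and comparable.
pchisq(2000, df = 1, lower.tail = FALSE, log.p = TRUE)
pchisq(3000, df = 1, lower.tail = FALSE, log.p = TRUE)
```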
If there are missing values in the dataset (predictor variables), column-wise imputation takes place: the median is used for the continuous variables and the mode for the categorical variables. This is a naive imputation method; for this reason the user is encouraged to make sure the data contain no missing values.
If you have percentages, in the (0, 1) interval, they are automatically mapped into $R$ by using the logit transformation. If you set the test to testIndBeta, beta regression is used instead. If you have compositional data, i.e. positive multivariate data where each vector sums to 1, with NO zeros, they are also mapped into the Euclidean space using the additive log-ratio (multivariate logit) transformation (Aitchison, 1986).
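A minimal sketch of the two transformations mentioned above (illustrative, not the package's internal code):

```r
# Logit transformation: maps percentages in (0, 1) onto the real line.
logit <- function(p) log(p / (1 - p))

# Additive log-ratio (alr) transformation for compositional data with
# no zeros, using the last component as the reference (Aitchison, 1986).
alr <- function(x) log(x[, -ncol(x), drop = FALSE] / x[, ncol(x)])
```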
If you use testIndSpearman (argument "test"), the ranks of the data are calculated and those are used in the calculations. This speeds up the whole procedure.
Tsamardinos I., Brown L.E. and Aliferis C.F. (2006). The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1), 31-78.
Aitchison J. (1986). The Statistical Analysis of Compositional Data. Chapman & Hall, London.
CondIndTests, cv.ses
set.seed(123)
#require(gRbase) #for faster computations in the internal functions
require(hash)
#simulate a dataset with continuous data
dataset <- matrix(runif(1000 * 1000, 1, 100), ncol = 1000)
#define a simulated class variable
target <- 3 * dataset[, 10] + 2 * dataset[, 200] + 3 * dataset[, 20] + rnorm(1000, 0, 5)
#define some simulated equivalences
dataset[, 15] <- dataset[, 10] + rnorm(1000, 0, 2)
dataset[, 10] <- dataset[ , 10] + rnorm(1000, 0, 2)
dataset[, 250] <- dataset[, 200] + rnorm(1000, 0, 2)
dataset[, 230] <- dataset[, 200] + rnorm(1000, 0, 2)
#run the SES algorithm
sesObject <- SES(target, dataset, max_k = 5, threshold = 0.05, test = "testIndFisher",
hash = TRUE, hashObject = NULL);
#print summary of the SES output
summary(sesObject);
#plot the SES output
plot(sesObject, mode = "all");
#get the queues with the equivalences for each selected variable
sesObject@queues
#get the generated signatures
sesObject@signatures;
# re-run the SES algorithm with the same or different configuration
# under the hash-based implementation of retrieving the statistics
# in the SAME dataset (!important)
hashObj <- sesObject@hashObject;
sesObject2 <- SES(target, dataset, max_k = 2, threshold = 0.01, test = "testIndFisher",
hash = TRUE, hashObject = hashObj);
sesObject3 <- SES(target, dataset, max_k = 2, threshold = 0.01, test = "testIndFisher",
ini = sesObject@univ, hash = TRUE, hashObject = hashObj);
# retrieve the results: summary, plot, sesObject2@...
summary(sesObject2)
# get the run time
sesObject@runtime;
sesObject2@runtime;
sesObject3@runtime;
# MMPC algorithm
mmpcObject <- MMPC(target, dataset, max_k = 3, threshold = 0.05, test = "testIndFisher",
hash = FALSE, hashObject=NULL);
mmpcObject@selectedVars
mmpcObject@runtime